Add inplace quantizer examples #2345
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2345
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 62483bc with merge base 7e7ea92.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D76312488
Force-pushed from 2779ddb to 6e1edbb
Summary: Pull Request resolved: pytorch#2345. Add a quantizer example for in-place ops. Differential Revision: D76312488
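The example itself isn't inlined in this thread. As a rough illustration of what a quantizer targeting an in-place op can look like in the PT2E flow, here is a minimal sketch; the class name `InPlaceOpQuantizer`, the int8 spec, and the choice of `aten.add_.Tensor` as the matched op are all assumptions for illustration, not the PR's actual code:

```python
import torch
from torch.ao.quantization.observer import HistogramObserver
from torch.ao.quantization.quantizer import (
    QuantizationAnnotation,
    QuantizationSpec,
    Quantizer,
)


class InPlaceOpQuantizer(Quantizer):
    """Illustrative quantizer that annotates the in-place aten.add_ op."""

    def annotate(self, model: torch.fx.GraphModule) -> torch.fx.GraphModule:
        int8_spec = QuantizationSpec(
            dtype=torch.int8,
            quant_min=-128,
            quant_max=127,
            qscheme=torch.per_tensor_affine,
            observer_or_fake_quant_ctr=HistogramObserver,
        )
        for node in model.graph.nodes:
            # Match the in-place variant of aten.add
            if node.op == "call_function" and node.target is torch.ops.aten.add_.Tensor:
                input_qspec_map = {
                    arg: int8_spec
                    for arg in node.args
                    if isinstance(arg, torch.fx.Node)
                }
                node.meta["quantization_annotation"] = QuantizationAnnotation(
                    input_qspec_map=input_qspec_map,
                    output_qspec=int8_spec,
                    _annotated=True,
                )
        return model

    def validate(self, model: torch.fx.GraphModule) -> None:
        pass
```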
Force-pushed from 6e1edbb to 1a54331
Force-pushed from 1a54331 to 52dbbc6
Summary: Pull Request resolved: pytorch#2345. Add a quantizer example for in-place ops, and patch the constant fold pass so that the mutable buffer won't be folded. Differential Revision: D76312488
```python
# Identify mutable buffers by finding copy_ operations
self.mutable_buffers = self._find_mutable_buffers()

def _find_mutable_buffers(self) -> set[torch.fx.Node]:
```
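The diff excerpt above cuts off at the helper's signature. Going by the comment (identify buffers that are the destination of a `copy_` in the graph), a plausible body might look like the following sketch; the `self.module` attribute and the exact op matching are assumptions, not the PR's actual implementation:

```python
def _find_mutable_buffers(self) -> set[torch.fx.Node]:
    """Collect buffer nodes that are written to in place via aten.copy_."""
    mutable_buffers: set[torch.fx.Node] = set()
    for node in self.module.graph.nodes:
        if node.op == "call_function" and node.target is torch.ops.aten.copy_.default:
            # The first argument of copy_ is the destination tensor being mutated
            dst = node.args[0]
            if isinstance(dst, torch.fx.Node) and dst.op in ("placeholder", "get_attr"):
                mutable_buffers.add(dst)
    return mutable_buffers
```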
why is this change needed? is there a test that exercises this code path?
Yeah, the added test will fail if we don't have this code path. The quantize_per_tensor_default will be folded together with the mutable buffer, which is not what we want.
makes sense, thanks!
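For context, the pattern under discussion can be reproduced with a module like the following illustrative toy (not the PR's actual test): after export, the in-place buffer update lowers to an `aten.copy_` on the buffer, and without the patch the constant folder would treat the buffer, along with the quantize_per_tensor node attached to it, as a foldable constant.

```python
import torch


class RunningState(torch.nn.Module):
    """Toy module with a buffer mutated in place; illustrative only."""

    def __init__(self) -> None:
        super().__init__()
        self.register_buffer("state", torch.zeros(4))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Lowered by export to aten.copy_ on the `state` buffer,
        # which is how the pass detects that the buffer is mutable.
        self.state.copy_(self.state + x)
        return self.state.clone()


example_inputs = (torch.randn(4),)
exported = torch.export.export(RunningState(), example_inputs)
print(exported.graph_module.graph)  # look for aten.copy_.default on the buffer
```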
Summary: Pull Request resolved: pytorch#2345. Add a quantizer example for in-place ops, and patch the constant fold pass so that the mutable buffer won't be folded. Reviewed By: jerryzh168. Differential Revision: D76312488
Force-pushed from 52dbbc6 to 7c1133d
Force-pushed from 7c1133d to 83395a6
Force-pushed from 83395a6 to e4e84dd
Force-pushed from e4e84dd to f67507c
Force-pushed from f67507c to 0498b7d
Summary: Similar to pytorch/ao#2345. During constant folding, we shouldn't fold the mutable buffers. In the pass, we first find the mutable buffers, and then skip them during folding. Test Plan: the added unit test test_constant_folding_mutable_buffer. Differential Revision: D76844103
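The summary describes the mechanism at a high level: find the mutable buffers first, then skip them while folding. A minimal sketch of what the skip check could look like follows; the function name `skips_mutable_buffer` and its integration point inside the folding loop are assumptions for illustration:

```python
import torch


def skips_mutable_buffer(
    node: torch.fx.Node, mutable_buffers: set[torch.fx.Node]
) -> bool:
    """Return True if folding `node` would bake a mutable buffer into a constant."""
    return node in mutable_buffers or any(
        isinstance(arg, torch.fx.Node) and arg in mutable_buffers
        for arg in node.args
    )
```

Inside the constant-folding loop, any node for which this check returns True would be left in the graph rather than evaluated and replaced with a frozen tensor.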
Force-pushed from 0498b7d to 5f38f5c
Force-pushed from 5f38f5c to 62483bc
Summary:
Add a quantizer example for in-place ops

Differential Revision: D76312488
Pull Request resolved: #2345